We study representation learning for efficient imitation learning over linear systems. In particular, we consider a setting where learning is split into two phases: (a) a pre-training step where a shared $k$-dimensional representation is learned from $H$ source policies, and (b) a target policy fine-tuning step where the learned representation is used to parameterize the policy class. We find that the imitation gap over trajectories generated by the learned target policy is bounded by $\tilde{O}\left( \frac{k n_x}{HN_{\mathrm{shared}}} + \frac{k n_u}{N_{\mathrm{target}}}\right)$, where $n_x > k$ is the state dimension, $n_u$ is the input dimension, $N_{\mathrm{shared}}$ denotes the total amount of data collected for each policy during representation learning, and $N_{\mathrm{target}}$ is the amount of target task data. This result formalizes the intuition that aggregating data across related tasks to learn a representation can significantly improve the sample efficiency of learning a target task. The trends suggested by this bound are corroborated in simulation.
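The two-phase procedure above can be illustrated with a minimal numerical sketch. This is a simplification, not the paper's algorithm: here the shared representation is recovered by an SVD of per-task least-squares policy estimates, the data is noiseless, and all task parameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_u, k, H = 10, 3, 2, 8          # state/input dims, rank, number of source tasks
N_shared, N_target = 200, k + 3       # per-task data; note N_target < n_x

Phi = np.linalg.qr(rng.normal(size=(n_x, k)))[0].T    # true shared rep (k x n_x)
F = [rng.normal(size=(n_u, k)) for _ in range(H)]     # per-source-task weights
F_tgt = rng.normal(size=(n_u, k))                     # target-task weights

# Phase (a): estimate each source policy K_h = F_h @ Phi by least squares,
# then take the top-k right singular subspace of the stacked estimates.
K_hats = []
for h in range(H):
    X = rng.normal(size=(N_shared, n_x))              # states
    U = X @ (F[h] @ Phi).T                            # expert actions (noiseless)
    K_hats.append(np.linalg.lstsq(X, U, rcond=None)[0].T)
Phi_hat = np.linalg.svd(np.vstack(K_hats))[2][:k]     # learned k x n_x representation

# Phase (b): fine-tune only the small n_u x k head on scarce target data.
X_t = rng.normal(size=(N_target, n_x))
U_t = X_t @ (F_tgt @ Phi).T
F_hat = np.linalg.lstsq(X_t @ Phi_hat.T, U_t, rcond=None)[0].T
K_tgt_hat = F_hat @ Phi_hat

err = np.linalg.norm(K_tgt_hat - F_tgt @ Phi) / np.linalg.norm(F_tgt @ Phi)
print(f"relative error of target policy: {err:.2e}")
```

With only k + 3 = 5 target samples, fewer than the state dimension n_x = 10, the target policy is still recovered essentially exactly, because only the low-dimensional head must be fit once the representation is shared.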
Partially observable Markov decision processes (POMDPs) provide a flexible representation for real-world decision and control problems. However, POMDPs are notoriously difficult to solve, especially when the state and observation spaces are continuous or hybrid, which is often the case for physical systems. While recent online sampling-based POMDP algorithms that plan with observation likelihood weighting have shown practical effectiveness, a general theory characterizing the approximation error of the particle filtering techniques that these algorithms use has not previously been proposed. Our main contribution is bounding the error between any POMDP and its corresponding finite sample particle belief MDP (PB-MDP) approximation. This fundamental bridge between PB-MDPs and POMDPs allows us to adapt any sampling-based MDP algorithm to a POMDP by solving the corresponding particle belief MDP, thereby extending the convergence guarantees of the MDP algorithm to the POMDP. Practically, this is implemented by using the particle filter belief transition model as the generative model for the MDP solver. While this requires access to the observation density model from the POMDP, it only increases the transition sampling complexity of the MDP solver by a factor of $\mathcal{O}(C)$, where $C$ is the number of particles. Thus, when combined with sparse sampling MDP algorithms, this approach can yield algorithms for POMDPs that have no direct theoretical dependence on the size of the state and observation spaces. In addition to our theoretical contribution, we perform five numerical experiments on benchmark POMDPs to demonstrate that a simple MDP algorithm adapted using PB-MDP approximation, Sparse-PFT, achieves performance competitive with other leading continuous observation POMDP solvers.
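The particle-belief transition used as the generative model for the MDP solver can be sketched as follows. This is a toy 1-D illustration with hypothetical function names, not the paper's implementation; the actual solvers wrap such a transition inside MCTS or sparse sampling.

```python
import numpy as np

def pf_belief_step(particles, weights, action, step_fn, sample_obs, obs_density, rng):
    """One transition of the particle belief MDP: propagate all C particles,
    sample an observation from one weight-selected particle, then reweight
    every particle by its observation likelihood. This costs O(C) calls to
    the underlying dynamics model per MDP-level transition."""
    nxt = np.array([step_fn(s, action, rng) for s in particles])
    obs = sample_obs(nxt[rng.choice(len(nxt), p=weights)], rng)
    w = weights * np.array([obs_density(obs, s) for s in nxt])
    total = w.sum()
    # Guard against weight collapse (no particle consistent with the observation).
    w = np.full_like(w, 1.0 / len(w)) if total == 0.0 else w / total
    return nxt, w, obs

# Toy 1-D system: s' = s + a + noise, o = s' + noise.
rng = np.random.default_rng(0)
step = lambda s, a, r: s + a + 0.1 * r.normal()
sample_obs = lambda s, r: s + 0.1 * r.normal()
density = lambda o, s: np.exp(-0.5 * ((o - s) / 0.1) ** 2)

C = 100
particles, weights = np.zeros(C), np.full(C, 1.0 / C)
particles, weights, obs = pf_belief_step(
    particles, weights, action=1.0, step_fn=step,
    sample_obs=sample_obs, obs_density=density, rng=rng)
print(f"posterior mean: {np.sum(weights * particles):.2f}")
```

An MDP algorithm treats `(particles, weights)` as a single state and calls `pf_belief_step` wherever it would call the MDP's transition model, which is where the factor-of-C overhead in transition sampling complexity comes from.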
Learning-based control schemes have recently shown great efficacy in performing complex tasks. However, in order to deploy them on real systems, it is essential to guarantee that the system remains safe during online training and execution. We therefore need safe online learning frameworks able to autonomously reason about whether the currently available information is sufficient to ensure safety, or whether new measurements are required. In this paper, we propose a framework consisting of two parts: first, an out-of-distribution detection mechanism that actively collects measurements when required, ensuring that at least one safe backup direction is always available; and second, a Gaussian-process-based probabilistic safety-critical controller that ensures the system stays safe at all times with high probability. Our approach exploits model knowledge through the use of control barrier functions, and collects measurements from the online data stream in an event-triggered fashion to guarantee the recursive feasibility of the learned safety-critical controller. This, in turn, allows us to provide formal results on the validity of the safe set with high probability, even in a-priori unexplored regions. Finally, we validate the proposed framework in numerical simulations of an adaptive cruise control system.
Learned models and policies can generalize effectively when evaluated within the distribution of the training data, but can produce unpredictable and erroneous outputs on out-of-distribution inputs. In order to avoid distribution shift when deploying learning-based control algorithms, we seek a mechanism that constrains the agent to states and actions that resemble those it was trained on. In control theory, Lyapunov stability and control-invariant sets allow us to make guarantees about controllers that stabilize the system around specific states, while in machine learning, density models allow us to estimate the training data distribution. Can we combine these two concepts to produce learning-based control algorithms that constrain the system to in-distribution states using only in-distribution actions? In this work, we propose to do exactly this by combining concepts from Lyapunov stability and density estimation, introducing Lyapunov density models: a generalization of control Lyapunov functions and density models that provides guarantees on an agent's ability to stay in-distribution over its entire trajectory.
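The underlying idea can be made concrete with a toy action filter. This is a greatly simplified, greedy one-step illustration with our own notation, not the paper's Lyapunov density model, which additionally provides trajectory-level guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
train_states = rng.normal(size=(500, 2))   # training distribution: N(0, I) in 2-D

def density(s, bandwidth=0.5):
    """Fixed-bandwidth Gaussian KDE over the training states."""
    d2 = np.sum((train_states - s) ** 2, axis=1)
    return np.mean(np.exp(-0.5 * d2 / bandwidth ** 2))

def filter_actions(state, candidates, dynamics):
    """Keep only actions whose predicted next state is at least as
    in-distribution as the current state (greedy, one-step version)."""
    p_now = density(state)
    return [a for a in candidates if density(dynamics(state, a)) >= p_now]

dynamics = lambda s, a: s + a              # trivial integrator dynamics
state = np.array([2.0, 0.0])               # near the edge of the training data
candidates = [np.array([0.5, 0.0]), np.array([-0.5, 0.0])]
allowed = filter_actions(state, candidates, dynamics)
print(allowed)
```

Here only the inward-pointing action survives the filter; a Lyapunov density model generalizes this from a greedy one-step check to a certificate that holds along the whole trajectory.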
Reach-avoid optimal control problems, in which a system must reach certain goal conditions while remaining clear of unacceptable failure modes, are central to the safety and liveness assurance of autonomous robotic systems, but their exact solutions are intractable for complex dynamics and environments. The recent success of reinforcement learning methods in approximately solving optimal control problems with performance objectives makes their application to certification problems attractive; however, the Lagrange-type objective used in reinforcement learning is not suitable for encoding temporal logic requirements. Recent work has shown promise in extending the reinforcement learning machinery to safety-type problems, whose objective is not a sum but a minimum (or maximum) over time. In this work, we generalize the reinforcement learning formulation to handle all optimal control problems in the reach-avoid category. We derive a time-discounted reach-avoid Bellman backup with a contraction mapping property and prove that the resulting reach-avoid Q-learning algorithm converges under conditions analogous to those for traditional Lagrange-type problems, yielding an arbitrarily tight conservative approximation of the reach-avoid set. We further demonstrate the use of this formulation with deep reinforcement learning methods, retaining zero-violation guarantees by treating the approximate solutions as untrusted oracles in a model-predictive supervisory control framework. We evaluate our proposed framework on a range of nonlinear systems, validating the results against analytic and numerical solutions, and through Monte Carlo simulation in previously intractable problems. Our results open the door to a range of learning-based methods for safe autonomous behavior, with applications across robotics and automation. See https://github.com/saferoboticslab/safety_rl for code and supplementary material.
Partially observable Markov decision processes (POMDPs) are a powerful framework for capturing decision-making problems involving state and transition uncertainty. However, most current POMDP planners cannot efficiently handle the very high-dimensional observations frequently encountered in the real world (e.g., image observations in robotic domains). In this work, we propose Visual Tree Search (VTS), a learning and planning procedure that combines generative models learned offline with online model-based POMDP planning. VTS bridges offline model training and online planning by utilizing a set of deep generative observation models to predict and evaluate the likelihood of image observations within a Monte Carlo tree search planner. We show that VTS is robust to different kinds of observation noise and, because it utilizes online, model-based planning, can adapt to different reward structures without retraining. This new approach outperforms baseline state-of-the-art policy planning algorithms while using significantly less offline training time.
We outline emerging opportunities and challenges for enhancing the utility of AI for scientific discovery. The distinct goals of AI for industry versus the goals of AI for science create tension between identifying patterns in data and discovering patterns in the world from data. If we address the fundamental challenges associated with "bridging the gap" between domain-driven scientific models and data-driven AI learning machines, then we expect that these AI models can transform hypothesis generation, scientific discovery, and the scientific process itself.
We explore methods for incorporating the uncertainty of data-driven object detectors into object tracking algorithms. Object tracking methods rely on a measurement error model, typically in the form of measurement noise, false positive rates, and missed detection rates. In general, these quantities may depend on the object or measurement location. However, for detections generated from camera inputs processed by neural networks, these measurement error statistics are insufficient to represent the primary source of error, namely the dissimilarity between the sensor input at runtime and the training data on which the detector was trained. To this end, we investigate incorporating data uncertainty into object tracking methods, for example to improve the ability to track objects, particularly those outside of the training data. The proposed methods are validated on an object tracking benchmark as well as in experiments with a real autonomous aircraft.
We study the problem of planning under model uncertainty in an online meta-reinforcement learning (RL) setting where an agent is presented with a sequence of related tasks with limited interactions per task. The agent can use its experience in each task and across tasks to estimate both the transition model and the distribution over tasks. We propose an algorithm to meta-learn the underlying structure across tasks, utilize it to plan in each task, and upper-bound the regret of the planning loss. Our bound suggests that the average regret over tasks decreases as the number of tasks increases and as the tasks are more similar. In the classical single-task setting, it is known that the planning horizon should depend on the estimated model's accuracy, that is, on the number of samples within the task. We generalize this finding to meta-RL and study this dependence of planning horizons on the number of tasks. Based on our theoretical findings, we derive heuristics for selecting slowly increasing discount factors, and we validate their significance empirically.
Underwater images are altered by the physical characteristics of the medium through which light rays pass before reaching the optical sensor. Scattering and strong wavelength-dependent absorption significantly modify the captured colors depending on the distance of observed elements to the image plane. In this paper, we aim to recover the original colors of the scene as if the water had no effect on them. We propose two novel methods that rely on different sets of inputs. The first assumes that pixel intensities in the restored image are normally distributed within each color channel, leading to an alternative optimization of the well-known \textit{Sea-thru} method which acts on single images and their distance maps. We additionally introduce SUCRe, a new method that further exploits the scene's 3D Structure for Underwater Color Restoration. By following points in multiple images and tracking their intensities at different distances to the sensor we constrain the optimization of the image formation model parameters. When compared to similar existing approaches, SUCRe provides clear improvements in a variety of scenarios ranging from natural light to deep-sea environments. The code for both approaches is publicly available at https://github.com/clementinboittiaux/sucre .
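The kind of image formation model being inverted can be sketched as follows. This is a simplified round trip with known coefficients and our own symbol names; Sea-thru and SUCRe instead estimate the model parameters by optimization from distance maps or multi-view correspondences.

```python
import numpy as np

def restore_colors(image, depth, beta_d, beta_b, b_inf):
    """Invert the simplified underwater image formation model
        I_c = J_c * exp(-beta_d_c * z) + B_inf_c * (1 - exp(-beta_b_c * z)),
    where z is the distance to the camera, per pixel and per channel c."""
    z = depth[..., None]                       # (H, W, 1), broadcast over channels
    backscatter = b_inf * (1.0 - np.exp(-beta_b * z))
    direct = image - backscatter               # remove the veiling backscatter
    restored = direct * np.exp(beta_d * z)     # undo wavelength-dependent attenuation
    return np.clip(restored, 0.0, 1.0)

# Round trip on synthetic data: degrade a scene, then restore it.
rng = np.random.default_rng(0)
J = rng.uniform(0.1, 0.9, size=(4, 4, 3))      # true scene colors
z = rng.uniform(1.0, 5.0, size=(4, 4))         # distance map (meters)
beta_d = np.array([0.40, 0.20, 0.10])          # attenuation, per RGB channel
beta_b = np.array([0.30, 0.20, 0.10])          # backscatter coefficients
b_inf = np.array([0.20, 0.30, 0.40])           # veiling light color
I = J * np.exp(-beta_d * z[..., None]) + b_inf * (1.0 - np.exp(-beta_b * z[..., None]))
restored = restore_colors(I, z, beta_d, beta_b, b_inf)
print(np.max(np.abs(restored - J)))
```

Note how the red channel (largest `beta_d`) is attenuated most with distance, which is why distant underwater scenes look blue-green; the restoration amplifies it back once the backscatter term has been subtracted.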